Fano's inequality for random variables
Authors: Sébastien Gerchinovitz, Pierre Ménard, Gilles Stoltz
Abstract
We extend Fano's inequality, which controls the average probability of (disjoint) events in terms of the average of some Kullback–Leibler divergences, to work with arbitrary [0, 1]-valued random variables. Our simple two-step methodology is general enough to cover the case of an arbitrary (possibly continuously infinite) family of distributions as well as [0, 1]-valued random variables not necessarily summing up to 1. Several novel applications are provided, in which the consideration of random variables is particularly handy. The most important applications deal with the problem of Bayesian posterior concentration (minimax or distribution-dependent) rates and with a lower bound on the regret in non-stochastic sequential learning. We also improve in passing some earlier fundamental results: in particular, we provide a simple and enlightening proof of the refined Pinsker's inequality of Ordentlich and Weinberger [2005] and derive a sharper form of the Bretagnolle–Huber [1978, 1979] inequality. MSC 2000 subject classifications: primary 62B10; secondary 62F15, 68T05.
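For context, the classical finite-family form of Fano's inequality that the paper generalizes can be stated as follows; this is the standard textbook statement, not the paper's extended version. For distributions P_1, …, P_N, pairwise disjoint events A_1, …, A_N, any reference distribution Q, and N ≥ 2:

\[ \frac{1}{N} \sum_{i=1}^{N} P_i(A_i) \;\le\; \frac{\frac{1}{N} \sum_{i=1}^{N} \operatorname{KL}(P_i \,\|\, Q) + \log 2}{\log N}. \]

The Bretagnolle–Huber inequality mentioned above bounds the total variation distance by the Kullback–Leibler divergence:

\[ \operatorname{TV}(P, Q) \;\le\; \sqrt{1 - e^{-\operatorname{KL}(P \,\|\, Q)}}. \]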
Similar papers
Information-Theoretic Limits for Density Estimation
This paper is concerned with the information-theoretic limits of density estimation for Gaussian random variables with data drawn independently and identically distributed. We apply Fano's inequality to the space of densities and an arbitrary estimator. We derive necessary conditions on the sample size for reliable density recovery and for reliable density estimation. These conditions ar...
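The abstract is truncated, but the usual Fano-based route to such sample-size conditions runs roughly as follows (a sketch of the standard argument, not necessarily the exact derivation of this paper): choose densities f_1, …, f_M that are 2δ-separated in the loss d, let P_i^n denote the joint law of n i.i.d. samples from f_i, and reduce estimation to testing, so that

\[ \inf_{\hat f} \max_{1 \le i \le M} P_i^n\big( d(\hat f, f_i) \ge \delta \big) \;\ge\; 1 - \frac{\max_{i,j} \operatorname{KL}(P_i^n \,\|\, P_j^n) + \log 2}{\log M}. \]

Reliable recovery then forces n to be large enough that the KL term grows at least like log M.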
A Finite-Sample, Distribution-Free, Probabilistic Lower Bound on Mutual Information
For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky–Kiefer–Wolfowitz inequality and is distribution-free. A quadratic-time algorithm is described for computing the bound and its corresponding...
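For reference, the Dvoretzky–Kiefer–Wolfowitz inequality on which that bound is built states (with Massart's tight constant) that for the empirical CDF F_n of n i.i.d. samples from a distribution with CDF F,

\[ \mathbb{P}\Big( \sup_{x \in \mathbb{R}} \big| F_n(x) - F(x) \big| > \varepsilon \Big) \;\le\; 2\, e^{-2 n \varepsilon^2} \quad \text{for all } \varepsilon > 0. \]

This holds with no assumption on F, which is what makes the resulting mutual-information bound distribution-free.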
Information-theoretic limits of Bayesian network structure learning
In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), on discrete as well as continuous random variables, from a finite number of samples. We show that the minimum number of samples required by any procedure to recover the correct structure grows as Ω(m) and Ω(k log m + k²/m) for non-sparse and sparse BNs respectively, where m is the n...
Information-theoretic lower bounds on learning the structure of Bayesian networks
In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), on discrete as well as continuous random variables, from a finite number of samples. We show that the minimum number of samples required by any procedure to recover the correct structure grows as Ω(m) and Ω(k log m + k/m) for non-sparse and sparse BNs respectively, where m is the numbe...
XIV. Processing and Transmission of Information: A. Lower Bounds on the Tails of Probability Distributions
Many problems in the transmission of information involve the distribution function of the sums of many random variables evaluated far from the mean. In these situations, a direct application of the Central Limit Theorem is virtually useless as an estimate of the distribution function. The so-called Chernoff bound, derived here in Eq. 13, turns out to be much more useful both as an upper bound a...
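Equation 13 of that report is not reproduced here, but the Chernoff bound it refers to is classically obtained by applying Markov's inequality to e^{sS_n}: for i.i.d. X_1, …, X_n with S_n = X_1 + … + X_n and any a > E[X_1],

\[ \mathbb{P}(S_n \ge n a) \;\le\; \exp\!\Big( -n \sup_{s \ge 0} \big( s a - \log \mathbb{E}\big[ e^{s X_1} \big] \big) \Big), \]

which decays exponentially in n and therefore remains informative far from the mean, exactly where the Central Limit Theorem approximation breaks down.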
Journal: CoRR
Volume: abs/1702.05985
Published: 2017